Results 1 - 20 of 42,598
1.
J Biomed Opt ; 29(Suppl 2): S22702, 2025 Dec.
Article in English | MEDLINE | ID: mdl-38434231

ABSTRACT

Significance: Advancements in label-free microscopy could provide real-time, non-invasive imaging with unique sources of contrast and automated standardized analysis to characterize heterogeneous and dynamic biological processes. These tools would overcome challenges with widely used methods that are destructive (e.g., histology, flow cytometry) or lack cellular resolution (e.g., plate-based assays, whole animal bioluminescence imaging). Aim: This perspective aims to (1) justify the need for label-free microscopy to track heterogeneous cellular functions over time and space within unperturbed systems and (2) recommend improvements regarding instrumentation, image analysis, and image interpretation to address these needs. Approach: Three key research areas (cancer research, autoimmune disease, and tissue and cell engineering) are considered to support the need for label-free microscopy to characterize heterogeneity and dynamics within biological systems. Based on the strengths (e.g., multiple sources of molecular contrast, non-invasive monitoring) and weaknesses (e.g., imaging depth, image interpretation) of several label-free microscopy modalities, improvements for future imaging systems are recommended. Conclusion: Improvements in instrumentation including strategies that increase resolution and imaging speed, standardization and centralization of image analysis tools, and robust data validation and interpretation will expand the applications of label-free microscopy to study heterogeneous and dynamic biological systems.


Subjects
Histological Techniques, Microscopy, Animals, Flow Cytometry, Computer-Assisted Image Processing
2.
Sci Rep ; 14(1): 9501, 2024 04 25.
Article in English | MEDLINE | ID: mdl-38664436

ABSTRACT

The use of various kinds of magnetic resonance imaging (MRI) techniques for examining brain tissue has increased significantly in recent years, and manual investigation of each of the resulting images can be a time-consuming task. This paper presents an automatic brain-tumor diagnosis system that uses a CNN for detection, classification, and segmentation of glioblastomas; the latter stage seeks to segment tumors inside glioma MRI images. The structure of the developed multi-unit system consists of two stages. The first stage is responsible for tumor detection and classification by categorizing brain MRI images into normal, high-grade glioma (glioblastoma), and low-grade glioma. The uniqueness of the proposed network lies in its use of different levels of features, including local and global paths. The second stage is responsible for tumor segmentation, and skip connections and residual units are used during this step. Using 1800 images extracted from the BraTS 2017 dataset, the detection and classification stage was found to achieve a maximum accuracy of 99%. The segmentation stage was then evaluated using the Dice score, specificity, and sensitivity. The results showed that the suggested deep-learning-based system ranks highest among a variety of different strategies reported in the literature.
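The segmentation-stage metrics named above (Dice score, sensitivity, specificity) all reduce to confusion counts over binary masks; a minimal pure-Python sketch (function and variable names are illustrative, not from the paper):

```python
def confusion_counts(pred, truth):
    """Count TP/FP/FN/TN over two flat binary masks of equal length."""
    tp = sum(p and t for p, t in zip(pred, truth))
    fp = sum(p and not t for p, t in zip(pred, truth))
    fn = sum((not p) and t for p, t in zip(pred, truth))
    tn = sum((not p) and (not t) for p, t in zip(pred, truth))
    return tp, fp, fn, tn

def dice(pred, truth):
    """Dice similarity coefficient: 2*TP / (2*TP + FP + FN)."""
    tp, fp, fn, _ = confusion_counts(pred, truth)
    return 2 * tp / (2 * tp + fp + fn)

def sensitivity(pred, truth):
    """True-positive rate: TP / (TP + FN)."""
    tp, _, fn, _ = confusion_counts(pred, truth)
    return tp / (tp + fn)

def specificity(pred, truth):
    """True-negative rate: TN / (TN + FP)."""
    _, fp, _, tn = confusion_counts(pred, truth)
    return tn / (tn + fp)
```

In practice these are computed per case over the flattened tumor masks and then averaged across the test set.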


Subjects
Brain Neoplasms, Magnetic Resonance Imaging, Neural Networks (Computer), Humans, Brain Neoplasms/diagnostic imaging, Brain Neoplasms/pathology, Brain Neoplasms/diagnosis, Magnetic Resonance Imaging/methods, Deep Learning, Glioma/diagnostic imaging, Glioma/pathology, Glioma/diagnosis, Glioblastoma/diagnostic imaging, Glioblastoma/diagnosis, Glioblastoma/pathology, Computer-Assisted Image Processing/methods, Brain/diagnostic imaging, Brain/pathology, Computer-Assisted Image Interpretation/methods
3.
Biomed Phys Eng Express ; 10(3)2024 Apr 26.
Article in English | MEDLINE | ID: mdl-38588646

ABSTRACT

Objective. In current radiograph-based intra-fraction markerless target-tracking, digitally reconstructed radiographs (DRRs) from planning CTs (CT-DRRs) are often used to train deep learning models that extract information from the intra-fraction radiographs acquired during treatment. Traditional DRR algorithms were designed for patient alignment (i.e. bone matching) and may not replicate the radiographic image quality of intra-fraction radiographs at treatment. Hypothetically, generating DRRs from pre-treatment Cone-Beam CTs (CBCT-DRRs) with DRR algorithms incorporating physical modelling of on-board imagers (OBIs) could improve the similarity between intra-fraction radiographs and DRRs by eliminating inter-fraction variation and reducing image-quality mismatches between radiographs and DRRs. In this study, we test the two hypotheses that intra-fraction radiographs are more similar to CBCT-DRRs than CT-DRRs, and that intra-fraction radiographs are more similar to DRRs from algorithms incorporating physical models of OBI components than DRRs from algorithms omitting these models. Approach. DRRs were generated from CBCT and CT image sets collected from 20 patients undergoing pancreas stereotactic body radiotherapy. CBCT-DRRs and CT-DRRs were generated replicating the treatment position of patients and the OBI geometry during intra-fraction radiograph acquisition. To investigate whether the modelling of physical OBI components influenced radiograph-DRR similarity, four DRR algorithms were applied for the generation of CBCT-DRRs and CT-DRRs, incorporating and omitting different combinations of OBI component models. The four DRR algorithms were: a traditional DRR algorithm, a DRR algorithm with source-spectrum modelling, a DRR algorithm with source-spectrum and detector modelling, and a DRR algorithm with source-spectrum, detector and patient material modelling.
Similarity between radiographs and matched DRRs was quantified using Pearson's correlation and Czekanowski's index, calculated on a per-image basis. Distributions of correlations and indexes were compared to test each of the hypotheses. Distribution differences were determined to be statistically significant when Wilcoxon's signed rank test and the Kolmogorov-Smirnov two-sample test returned p ≤ 0.05 for both tests. Main results. Intra-fraction radiographs were more similar to CBCT-DRRs than CT-DRRs for both metrics across all algorithms, with all p ≤ 0.007. Source-spectrum modelling improved radiograph-DRR similarity for both metrics, with all p < 10⁻⁶. OBI detector modelling and patient material modelling did not influence radiograph-DRR similarity for either metric. Significance. Generating DRRs from pre-treatment CBCTs is feasible, and incorporating CBCT-DRRs into markerless target-tracking methods may promote improved target-tracking accuracies. Incorporating source-spectrum modelling into a treatment planning system's DRR algorithms may reinforce the safe treatment of cancer patients by aiding in patient alignment.
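The two per-image similarity metrics used in this study have simple closed forms; a minimal sketch over flattened intensity lists (illustrative only, not the authors' implementation):

```python
import math

def pearson(x, y):
    """Pearson correlation between two flat intensity lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def czekanowski(x, y):
    """Czekanowski's similarity index for non-negative intensities:
    2 * sum(min(x_i, y_i)) / sum(x_i + y_i), in [0, 1]."""
    return 2 * sum(min(a, b) for a, b in zip(x, y)) / sum(a + b for a, b in zip(x, y))
```

Each radiograph-DRR pair yields one value per metric, and the per-image distributions are then compared across algorithms.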


Subjects
Algorithms, Cone-Beam Computed Tomography, Pancreatic Neoplasms, Radiosurgery, Humans, Cone-Beam Computed Tomography/methods, Radiosurgery/methods, Pancreatic Neoplasms/radiotherapy, Pancreatic Neoplasms/diagnostic imaging, Computer-Assisted Image Processing/methods, Computer-Assisted Radiotherapy Planning/methods, Deep Learning, X-Ray Computed Tomography/methods, Pancreas/diagnostic imaging, Pancreas/surgery, Imaging Phantoms
4.
Biomed Phys Eng Express ; 10(3)2024 Apr 26.
Article in English | MEDLINE | ID: mdl-38588648

ABSTRACT

Objective. Ultrasound-assisted orthopaedic navigation holds promise due to its non-ionizing nature, portability, low cost, and real-time performance. To enable such applications, accurate and real-time bone surface segmentation is critical. Nevertheless, imaging artifacts and low signal-to-noise ratios in tomographical B-mode ultrasound (B-US) images create substantial challenges for bone surface detection. In this study, we present an end-to-end lightweight ultrasound bone segmentation network (UBS-Net) for bone surface detection. Approach. UBS-Net uses the U-Net structure as its base framework and a level-set loss function for improved sensitivity to bone surface detectability. A dual attention (DA) mechanism is introduced at the end of the encoder, which considers both position and channel information to capture the correlation between the position and channel dimensions of the feature map; axial attention (AA) replaces the traditional self-attention (SA) mechanism in the position attention module for better computational efficiency. The position attention and channel attention (CA) are combined through a two-class fusion module to form the DA map, and the decoding module completes the bone surface detection. Main results. The method achieved a detection frame rate of 21 frames per second (fps) and outperformed the state-of-the-art method with higher segmentation accuracy (Dice similarity coefficient: 88.76% versus 87.22%) when applied to retrospective ultrasound (US) data from 11 volunteers. Significance. The proposed UBS-Net achieves outstanding accuracy and real-time performance for bone surface detection in ultrasound, outperforming state-of-the-art methods, and has potential in US-guided orthopaedic surgery applications.
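As a rough illustration of why axial attention is cheaper than full self-attention, the sketch below attends along one image axis at a time, so each position compares against only its own row rather than all H×W positions (scalar channels and identity Q/K/V projections are simplifying assumptions here, not the UBS-Net design):

```python
import math

def softmax(v):
    """Numerically stable softmax over a list of scores."""
    m = max(v)
    e = [math.exp(x - m) for x in v]
    s = sum(e)
    return [x / s for x in e]

def axial_attention_rows(feat):
    """Self-attention applied independently along each row of a 2-D
    feature map. Attending along one axis at a time reduces the cost
    from O((H*W)^2) pairs to O(H*W*(H+W)) when combined with a
    column pass (omitted here for brevity)."""
    out = []
    for row in feat:
        new_row = []
        for q in row:
            w = softmax([q * k for k in row])          # weights along the row only
            new_row.append(sum(wi * v for wi, v in zip(w, row)))
        out.append(new_row)
    return out
```

A full axial block would run this pass along rows and then along columns, with learned projections per channel.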


Subjects
Computer-Assisted Image Processing, Signal-to-Noise Ratio, Ultrasonography, Humans, Ultrasonography/methods, Computer-Assisted Image Processing/methods, Algorithms, Bone and Bones/diagnostic imaging, Neural Networks (Computer)
5.
Phys Med Biol ; 69(10)2024 Apr 26.
Article in English | MEDLINE | ID: mdl-38593827

ABSTRACT

Objective. To address the challenge of meningioma grading, this study aims to investigate the potential value of peritumoral edema (PTE) regions and proposes a unique approach that integrates radiomics and deep learning techniques. Approach. The primary focus is on developing a transfer learning-based meningioma feature extraction model (MFEM) that leverages both vision transformer (ViT) and convolutional neural network (CNN) architectures. Additionally, the study explores the significance of the PTE region in enhancing the grading process. Main results. The proposed method demonstrates excellent grading accuracy and robustness on a dataset of 98 meningioma patients. It achieves an accuracy of 92.86%, precision of 93.44%, sensitivity of 95%, and specificity of 89.47%. Significance. This study provides valuable insights into preoperative meningioma grading by introducing an innovative method that combines radiomics and deep learning techniques. The approach not only enhances accuracy but also reduces observer subjectivity, thereby contributing to improved clinical decision-making processes.


Subjects
Deep Learning, Computer-Assisted Image Processing, Meningioma, Neoplasm Grading, Meningioma/diagnostic imaging, Meningioma/pathology, Humans, Computer-Assisted Image Processing/methods, Edema/diagnostic imaging, Meningeal Neoplasms/diagnostic imaging, Meningeal Neoplasms/pathology
6.
Surg Innov ; 31(3): 291-306, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38619039

ABSTRACT

OBJECTIVE: To propose a transfer learning-based method of tumor segmentation in intraoperative fluorescence images, which will assist surgeons to efficiently and accurately identify the boundary of tumors of interest. METHODS: We employed transfer learning and deep convolutional neural networks (DCNNs) for tumor segmentation. Specifically, we first pre-trained four networks on the ImageNet dataset to extract low-level features. Subsequently, we fine-tuned these networks on two fluorescence image datasets (ABFM and DTHP) separately to enhance the segmentation performance of fluorescence images. Finally, we tested the trained models on the DTHL dataset. The performance of this approach was compared and evaluated against DCNNs trained end-to-end and the traditional level-set method. RESULTS: The transfer learning-based UNet++ model achieved high segmentation accuracies of 82.17% on the ABFM dataset, 95.61% on the DTHP dataset, and 85.49% on the DTHL test set. For the DTHP dataset, the pre-trained DeepLab v3+ network performed exceptionally well, with a segmentation accuracy of 96.48%. Furthermore, all models achieved segmentation accuracies of over 90% when dealing with the DTHP dataset. CONCLUSION: To the best of our knowledge, this study explores tumor segmentation on intraoperative fluorescent images for the first time. The results show that compared to traditional methods, deep learning has significant advantages in improving segmentation performance. Transfer learning enables deep learning models to perform better on small-sample fluorescence image data compared to end-to-end training. This discovery provides strong support for surgeons to obtain more reliable and accurate image segmentation results during surgery.


Subjects
Neural Networks (Computer), Optical Imaging, Humans, Optical Imaging/methods, Neoplasms/surgery, Neoplasms/diagnostic imaging, Deep Learning, Computer-Assisted Image Processing/methods, Computer-Assisted Surgery/methods
7.
Sci Rep ; 14(1): 8924, 2024 04 18.
Article in English | MEDLINE | ID: mdl-38637613

ABSTRACT

Accurate measurement of abdominal aortic aneurysms is essential for selecting suitable stent-grafts to avoid complications of endovascular aneurysm repair. However, conventional image-based measurements are inaccurate and time-consuming. We introduce an automated workflow comprising semantic segmentation with active learning (AL) and measurement using an application programming interface for computer-aided design. 300 patients underwent CT scans, and semantic segmentation of the aorta, thrombus, calcification, and vessels was performed on 60-300 cases with AL across five stages, using UNETR, SwinUNETR, and nnU-Net (comprising 2D U-Net, 3D U-Net, a 2D-3D U-Net ensemble, and cascaded 3D U-Net). Seven clinical landmarks were automatically measured for 96 patients. In AL stage 5, 3D U-Net achieved the highest Dice similarity coefficient (DSC), with statistically significant differences (p < 0.01) from all models except the 2D-3D U-Net ensemble and cascaded 3D U-Net. SwinUNETR excelled in 95% Hausdorff distance (HD95), with significant differences (p < 0.01) from all models except UNETR and 3D U-Net. The DSC of the aorta and calcification saturated at stages 1 and 4, respectively, whereas the thrombus and vessels continued to improve through stage 5. Relative to fully manual segmentation, the time for AL-corrected segmentation using the best model (3D U-Net) was reduced to 9.51 ± 1.02, 2.09 ± 1.06, 1.07 ± 1.10, and 1.07 ± 0.97 min for the aorta, thrombus, calcification, and vessels, respectively (p < 0.001). The errors of the landmark measurements and the tortuosity ratio were -1.71 ± 6.53 mm and -0.15 ± 0.25, respectively. We developed an automated workflow with semantic segmentation and measurement, demonstrating its efficiency compared to conventional methods.
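The HD95 metric reported here is the 95th percentile of symmetric surface distances between two contours; a brute-force sketch over 2-D point sets (the exact percentile convention varies between toolkits and is a simplifying assumption here):

```python
def hd95(points_a, points_b):
    """95th-percentile symmetric Hausdorff distance between two point
    sets given as (x, y) tuples. Brute-force nearest-neighbour search,
    for illustration only; real pipelines use spatial indexing."""
    def directed(src, dst):
        # distance from each source point to its nearest destination point
        return [
            min(((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5 for xb, yb in dst)
            for xa, ya in src
        ]
    d = directed(points_a, points_b) + directed(points_b, points_a)
    d.sort()
    idx = min(len(d) - 1, int(round(0.95 * (len(d) - 1))))
    return d[idx]
```

Taking the 95th percentile instead of the maximum makes the metric robust to a few outlier surface points.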


Subjects
Abdominal Aortic Aneurysm, Blood Vessel Prosthesis Implantation, Calcinosis, Endovascular Procedures, Thrombosis, Humans, Abdominal Aortic Aneurysm/diagnostic imaging, Problem-Based Learning, Semantics, X-Ray Computed Tomography, Computer-Assisted Image Processing
8.
Technol Cancer Res Treat ; 23: 15330338241245943, 2024.
Article in English | MEDLINE | ID: mdl-38660703

ABSTRACT

BACKGROUND: Hepatocellular carcinoma (HCC) is a serious health concern because of its high morbidity and mortality. The prognosis of HCC largely depends on the disease stage at diagnosis. Computed tomography (CT) image textural analysis is an image analysis technique that has emerged in recent years. OBJECTIVE: To probe the feasibility of a CT radiomic model for predicting early (stages 0, A) and intermediate (stage B) HCC using Barcelona Clinic Liver Cancer (BCLC) staging. METHODS: A total of 190 patients with stage 0, A, or B HCC according to CT-enhanced arterial and portal vein phase images were retrospectively assessed. The lesions were delineated manually to construct a region of interest (ROI) consisting of the entire tumor mass. The textural profiles of the ROIs were then extracted using dedicated software. Least absolute shrinkage and selection operator (LASSO) dimensionality reduction was used to screen the textural profiles, and area under the receiver operating characteristic curve values were obtained. RESULTS: Within the test cohort, the area under the curve (AUC) values associated with arterial-phase images and BCLC stages 0, A, and B disease were 0.99, 0.98, and 0.99, respectively. The overall accuracy rate was 92.7%. The AUC values associated with portal vein phase images and BCLC stages 0, A, and B disease were 0.98, 0.95, and 0.99, respectively, with an overall accuracy of 90.9%. CONCLUSION: The CT radiomic model can be used to predict the BCLC stage of early-stage and intermediate-stage HCC.
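The AUC values reported above have a useful rank interpretation: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A minimal sketch via the Mann-Whitney U statistic (illustrative only, not the software used in the study):

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve computed as the Mann-Whitney U
    statistic: the fraction of (positive, negative) pairs where the
    positive case scores higher, counting ties as half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

An AUC of 0.99, as reported for the arterial-phase model, means almost every positive-negative pair is ranked correctly.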


Subjects
Hepatocellular Carcinoma, Feasibility Studies, Liver Neoplasms, Neoplasm Staging, ROC Curve, X-Ray Computed Tomography, Humans, Hepatocellular Carcinoma/diagnostic imaging, Hepatocellular Carcinoma/pathology, Liver Neoplasms/diagnostic imaging, Liver Neoplasms/pathology, Male, X-Ray Computed Tomography/methods, Female, Middle Aged, Aged, Retrospective Studies, Prognosis, Adult, Computer-Assisted Image Processing/methods, Area Under Curve
9.
Comput Biol Med ; 173: 108390, 2024 May.
Article in English | MEDLINE | ID: mdl-38569234

ABSTRACT

Radiotherapy is one of the primary treatment methods for tumors, but the organ movement caused by respiration limits its accuracy. Recently, 3D imaging from a single X-ray projection has received extensive attention as a promising approach to address this issue. However, current methods can only reconstruct 3D images without directly locating the tumor, and are only validated for fixed-angle imaging, which fails to fully meet the requirements of motion control in radiotherapy. In this study, a novel imaging method, RT-SRTS, is proposed that integrates 3D imaging and tumor segmentation into one network based on multi-task learning (MTL) and achieves real-time simultaneous 3D reconstruction and tumor segmentation from a single X-ray projection at any angle. Furthermore, attention enhanced calibrator (AEC) and uncertain-region elaboration (URE) modules are proposed to aid feature extraction and improve segmentation accuracy. The proposed method was evaluated on fifteen patient cases and compared with three state-of-the-art methods. It not only delivers superior 3D reconstruction but also demonstrates commendable tumor segmentation results. Simultaneous reconstruction and segmentation can be completed in approximately 70 ms, significantly faster than the required time threshold for real-time tumor tracking. The efficacies of both AEC and URE have also been validated in ablation studies. The code for this work is available at https://github.com/ZywooSimple/RT-SRTS.


Subjects
Three-Dimensional Imaging, Neoplasms, Humans, Three-Dimensional Imaging/methods, X-Rays, Radiography, Neoplasms/diagnostic imaging, Respiration, Computer-Assisted Image Processing/methods
10.
Comput Biol Med ; 173: 108293, 2024 May.
Article in English | MEDLINE | ID: mdl-38574528

ABSTRACT

Accurately identifying the Kirsten rat sarcoma virus (KRAS) gene mutation status in colorectal cancer (CRC) patients can assist doctors in deciding whether to use specific targeted drugs for treatment. Although deep learning methods are popular, they are often affected by redundant features from non-lesion areas. Moreover, existing methods commonly extract spatial features from imaging data, which neglect important frequency domain features and may degrade the performance of KRAS gene mutation status identification. To address this deficiency, we propose a segmentation-guided Transformer U-Net (SG-Transunet) model for KRAS gene mutation status identification in CRC. Integrating the strength of convolutional neural networks (CNNs) and Transformers, SG-Transunet offers a unique approach for both lesion segmentation and KRAS mutation status identification. Specifically, for precise lesion localization, we employ an encoder-decoder to obtain segmentation results and guide the KRAS gene mutation status identification task. Subsequently, a frequency domain supplement block is designed to capture frequency domain features, integrating it with high-level spatial features extracted in the encoding path to derive advanced spatial-frequency domain features. Furthermore, we introduce a pre-trained Xception block to mitigate the risk of overfitting associated with small-scale datasets. Following this, an aggregate attention module is devised to consolidate spatial-frequency domain features with global information extracted by the Transformer at shallow and deep levels, thereby enhancing feature discriminability. Finally, we propose a mutual-constrained loss function that simultaneously constrains the segmentation mask acquisition and gene status identification process. Experimental results demonstrate the superior performance of SG-Transunet over state-of-the-art methods in discriminating KRAS gene mutation status.
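A frequency domain supplement block of the kind described above consumes spectral features of the input. As a toy illustration only (a naive 1-D DFT over an intensity profile, not the paper's block, which operates on 2-D feature maps), frequency-domain magnitudes can be computed as:

```python
import cmath

def dft_magnitudes(signal):
    """Naive discrete Fourier transform magnitudes of a 1-D intensity
    profile: |X_k| = |sum_t x_t * exp(-2*pi*i*k*t/n)|. These magnitudes
    are the kind of frequency-domain feature a frequency supplement
    block could combine with spatial features."""
    n = len(signal)
    mags = []
    for k in range(n):
        acc = sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                  for t in range(n))
        mags.append(abs(acc))
    return mags
```

For a constant signal all energy sits in the zero-frequency bin, which is why texture (variation) shows up only in the higher-frequency magnitudes.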


Subjects
Colorectal Neoplasms, Proto-Oncogene Proteins p21(ras), Humans, Proto-Oncogene Proteins p21(ras)/genetics, Drug Delivery Systems, Mutation/genetics, Neural Networks (Computer), Colorectal Neoplasms/diagnostic imaging, Colorectal Neoplasms/genetics, Computer-Assisted Image Processing
11.
Biomed Phys Eng Express ; 10(3)2024 Apr 18.
Article in English | MEDLINE | ID: mdl-38579691

ABSTRACT

Background. Modern radiation therapy technologies aim to enhance radiation dose precision to the tumor and utilize hypofractionated treatment regimens. Verifying the dose distributions associated with these advanced radiation therapy treatments remains an active research area due to the complexity of delivery systems and the lack of suitable three-dimensional dosimetry tools. Gel dosimeters are a potential tool for measuring these complex dose distributions. A prototype tabletop solid-tank fan-beam optical CT scanner for readout of gel dosimeters was recently developed. This scanner does not have a straight raypath from source to detector, thus images cannot be reconstructed using filtered backprojection (FBP) and iterative techniques are required. Purpose. To compare a subset of the top-performing algorithms in terms of image quality and quantitatively determine the optimal algorithm while accounting for refraction within the optical CT system. The following algorithms were compared: Landweber, superiorized Landweber with the fast gradient projection perturbation routine (S-LAND-FGP), the fast iterative shrinkage/thresholding algorithm with total variation penalty term (FISTA-TV), a monotone version of FISTA-TV (MFISTA-TV), superiorized conjugate gradient with the nonascending perturbation routine (S-CG-NA), superiorized conjugate gradient with the fast gradient projection perturbation routine (S-CG-FGP), and superiorized conjugate gradient with two iterations of CG performed on the current iterate and the nonascending perturbation routine (S-CG-2-NA). Methods. A ray tracing simulator was developed to track the path of light rays as they traverse the different mediums of the optical CT scanner. Two clinical phantoms and several synthetic phantoms were produced and used to evaluate the reconstruction techniques under known conditions.
Reconstructed images were analyzed in terms of spatial resolution, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), signal non-uniformity (SNU), mean relative difference (MRD) and reconstruction time. We developed an image quality based method to find the optimal stopping iteration window for each algorithm. Imaging data from the prototype optical CT scanner was reconstructed and analyzed to determine the optimal algorithm for this application. Results. The optimal algorithms found through the quantitative scoring metric were FISTA-TV and S-CG-2-NA. MFISTA-TV was found to behave almost identically to FISTA-TV, however MFISTA-TV was unable to resolve some of the synthetic phantoms. S-CG-NA showed extreme fluctuations in the SNR and CNR values. S-CG-FGP had large fluctuations in the SNR and CNR values, and the algorithm has less noise reduction than FISTA-TV and worse spatial resolution than S-CG-2-NA. S-LAND-FGP had many of the same characteristics as FISTA-TV: high noise reduction and stability from over-iterating. However, S-LAND-FGP has worse SNR, CNR and SNU values as well as longer reconstruction time. S-CG-2-NA has superior spatial resolution to all algorithms while still maintaining good noise reduction and is uniquely stable from over-iterating. Conclusions. Both optimal algorithms (FISTA-TV and S-CG-2-NA) are stable from over-iterating and have excellent edge detection, with ESF MTF 50% values of 1.266 mm⁻¹ and 0.992 mm⁻¹. FISTA-TV had the greatest noise reduction, with SNR, CNR and SNU values of 424, 434 and 0.91 × 10⁻⁴, respectively. However, low spatial resolution makes FISTA-TV only viable for large field dosimetry. S-CG-2-NA has better spatial resolution than FISTA-TV, with PSF and LSF MTF 50% values of 1.581 mm⁻¹ and 0.738 mm⁻¹, but less noise reduction. S-CG-2-NA still maintains good SNR, CNR, and SNU values of 168, 158 and 1.13 × 10⁻⁴, respectively.
Thus, S-CG-2-NA is a well-rounded reconstruction algorithm that would be the preferable choice for small field dosimetry.
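Several of the compared algorithms (Landweber, S-LAND-FGP) build on the basic Landweber iteration x ← x + λ Aᵀ(b − Ax) for the linear system Ax = b; a minimal sketch for a small dense system (illustrative only, not the scanner's reconstruction code):

```python
def landweber(A, b, x0, step, iters):
    """Landweber iteration for A x = b. A is a list of rows, b and x0
    are flat lists, step is the relaxation parameter lambda. Each pass
    forms the residual b - A x and moves x along A^T residual."""
    x = list(x0)
    for _ in range(iters):
        residual = [bi - sum(a * xi for a, xi in zip(row, x))
                    for row, bi in zip(A, b)]
        for j in range(len(x)):
            x[j] += step * sum(A[i][j] * residual[i] for i in range(len(A)))
    return x
```

Superiorized variants interleave these update steps with small perturbations (e.g. toward lower total variation), which is what distinguishes S-LAND-FGP from the plain scheme above.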


Subjects
Computer-Assisted Image Processing, X-Ray Computed Tomography, Computer-Assisted Image Processing/methods, X-Ray Computed Tomography/methods, Radiometry/methods, Signal-to-Noise Ratio, Algorithms
12.
Comput Methods Programs Biomed ; 249: 108141, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38574423

ABSTRACT

BACKGROUND AND OBJECTIVE: Lung tumor annotation is a key upstream task for further diagnosis and prognosis. Although deep learning techniques have promoted automation of lung tumor segmentation, challenges remain that impede its application in clinical practice, such as a lack of prior annotation for model training and of data-sharing among centers. METHODS: In this paper, we use data from six centers to design a novel federated semi-supervised learning (FSSL) framework with dynamic model aggregation and improve segmentation performance for lung tumors. Specifically, we propose a dynamically updated algorithm to handle model parameter aggregation in FSSL, which takes advantage of both the quality and the quantity of client data. Moreover, to increase the accessibility of data in the federated learning (FL) network, we incorporate the FAIR data principles, which previous federated methods have not considered. RESULTS: The experimental results show that the segmentation performance of our model across the six centers is 0.9348, 0.8436, 0.8328, 0.7776, 0.8870 and 0.8460, respectively, superior to traditional deep learning methods and recent federated semi-supervised learning methods. CONCLUSION: The experimental results demonstrate that our method is superior to existing FSSL methods. In addition, our proposed dynamic update strategy effectively utilizes the quality and quantity information of client data and is efficient for lung tumor segmentation. The source code is released at https://github.com/GDPHMediaLab/FedDUS.
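The dynamic aggregation described above weights each client's parameters by properties of its data; a minimal FedAvg-style sketch with quantity-only weights (the authors' scheme also incorporates data quality, and parameter names here are illustrative):

```python
def aggregate(client_params, client_weights):
    """FedAvg-style server aggregation: each global parameter is the
    weighted mean of the clients' parameters, weighted here by client
    sample counts. Parameters are represented as flat lists."""
    total = sum(client_weights)
    n_params = len(client_params[0])
    return [
        sum(w * params[j] for params, w in zip(client_params, client_weights)) / total
        for j in range(n_params)
    ]
```

A dynamic variant would recompute the weights each round, e.g. from per-client validation quality as well as data quantity, rather than keeping them fixed.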


Subjects
Algorithms, Lung Neoplasms, Humans, Automation, Lung Neoplasms/diagnostic imaging, Software, Supervised Machine Learning, X-Ray Computed Tomography, Computer-Assisted Image Processing
13.
PLoS One ; 19(4): e0299360, 2024.
Article in English | MEDLINE | ID: mdl-38557660

ABSTRACT

Ovarian cancer is a highly lethal malignancy in the field of oncology. Segmentation of ovarian medical images is a necessary prerequisite for diagnosis and treatment planning, so accurately segmenting ovarian tumors is of utmost importance. In this work, we propose a hybrid network called PMFFNet to improve the segmentation accuracy of ovarian tumors. PMFFNet utilizes an encoder-decoder architecture. Specifically, the encoder incorporates the ViTAEv2 model to extract inter-layer multi-scale features from the feature pyramid. To address the limitation of a fixed window size that hinders sufficient interaction of information, we introduce Varied-Size Window Attention (VSA) into the ViTAEv2 model to capture rich contextual information. Additionally, recognizing the significance of multi-scale features, we introduce the Multi-scale Feature Fusion Block (MFB) module. The MFB module enhances the network's capacity to learn intricate features by capturing both local and multi-scale information, thereby enabling more precise segmentation of ovarian tumors. Finally, in conjunction with our designed decoder, our model achieves outstanding performance on the MMOTU dataset. The results are highly promising, with the model achieving scores of 97.24%, 91.15%, and 87.25% on the mACC, mIoU, and mDice metrics, respectively. When compared to several Unet-based and advanced models, our approach demonstrates the best segmentation performance.


Subjects
Ovarian Neoplasms, Female, Humans, Ovarian Neoplasms/diagnostic imaging, Benchmarking, Learning, Medical Oncology, Computer-Assisted Image Processing
14.
PLoS One ; 19(4): e0301019, 2024.
Article in English | MEDLINE | ID: mdl-38573957

ABSTRACT

Automatic and accurate segmentation of medical images plays an essential role in disease diagnosis and treatment planning. Convolutional neural networks have achieved remarkable results in medical image segmentation over the past decade, and deep learning models based on the Transformer architecture have also achieved tremendous success in this domain. However, due to the ambiguity of medical image boundaries and the high complexity of anatomical structures, achieving effective structure extraction and accurate segmentation remains an open problem. In this paper, we propose a novel Dual Encoder Network named DECTNet to alleviate this problem. Specifically, DECTNet comprises four components: a convolution-based encoder, a Transformer-based encoder, a feature fusion decoder, and a deep supervision module. The convolutional encoder extracts fine spatial contextual details in images, while the Transformer encoder is designed using a hierarchical Swin Transformer architecture to model global contextual information. The novel feature fusion decoder integrates the multi-scale representations from the two encoders and selects features relevant to the segmentation task via a channel attention mechanism. Further, a deep supervision module is used to accelerate the convergence of the proposed method. Extensive experiments demonstrate that, compared to seven other models, the proposed method achieves state-of-the-art results on four segmentation tasks: skin lesion segmentation, polyp segmentation, Covid-19 lesion segmentation, and MRI cardiac segmentation.


Subjects
COVID-19, Physical Examination, Humans, Electric Power Supplies, Heart, Neural Networks (Computer), Computer-Assisted Image Processing
15.
Sci Rep ; 14(1): 8738, 2024 04 16.
Article in English | MEDLINE | ID: mdl-38627421

ABSTRACT

Glioblastoma is a brain tumor arising from abnormal cells in the brain and is detected with magnetic resonance imaging (MRI), which uses a powerful magnetic field, radio waves, and a computer to produce detailed images of the body's internal structures; MRI is a standard diagnostic tool for conditions ranging from brain and spinal cord injuries to tumors and joint problems. Untreated glioblastoma can be fatal, so timely diagnosis from MRI scans is essential, and neural networks can assist in resolving such brain-related diagnostic problems. This research applies maximum and minimum image rationalization together with a boosted division time attribute extraction method to diagnose glioblastoma. Max/min rationalization is used to recognize glioblastoma in brain images for treatment efficiency, image segments are created for recognition, and the boosted division time attribute extraction method extracts features from the MRI images. The proposed method recognizes the images and detects glioblastoma with feasible accuracy using image rationalization. Reportedly, 45% of those affected by the tumor are adults, 40% are children, and 5% of cases end in death; to reduce this ratio, this study identifies and segments glioblastoma in the images. Tumor grades were then analyzed using the proposed imaging method, with a partially high diagnostic result. The proposed TAE-PIS system achieves an accuracy of 98.12%, higher than comparison methods including a genetic algorithm (GA), a convolutional neural network (CNN), a fuzzy-based minimum and maximum neural network (fuzzy min-max NN), and a kernel-based support vector machine (SVM). Experimental results show 98.12% accuracy with low response time, corresponding to substantial improvements of 80.82%, 82.13%, 85.61%, and 87.03% over GA, CNN, fuzzy min-max NN, and kernel-based SVM, respectively.


Subjects
Brain Neoplasms , Glioblastoma , Adult , Child , Humans , Glioblastoma/diagnostic imaging , Image Processing, Computer-Assisted/methods , Brain Neoplasms/pathology , Brain/diagnostic imaging , Brain/pathology , Algorithms
16.
Sci Rep ; 14(1): 8504, 2024 04 12.
Article in English | MEDLINE | ID: mdl-38605094

ABSTRACT

This work aims to investigate the clinical feasibility of deep learning-based synthetic CT images for cervix cancer, comparing them to MR for calculating attenuation (MRCAT). A patient cohort of 50 paired T2-weighted MR and CT images from cervical cancer patients was split into 40 for training and 10 for testing. As preprocessing, we performed deformable image registration and Nyul intensity normalization on the MR images to maximize the similarity between MR and CT. The processed images were fed into a deep learning model, a generative adversarial network. To establish clinical feasibility, we assessed the synthetic CT images for image similarity, using the structural similarity index (SSIM) and mean absolute error (MAE), and for dosimetric similarity, using the gamma passing rate (GPR). Dose calculation was performed on the true and synthetic CT images with a commercial Monte Carlo algorithm. The synthetic CT images generated by deep learning outperformed MRCAT images in image similarity by 1.5% in SSIM and 18.5 HU in MAE. In dosimetry, the DL-based synthetic CT images achieved GPRs of 98.71% and 96.39% at the 1%/1 mm criterion with 10% and 60% cut-off values of the prescription dose, 0.9% and 5.1% higher than the MRCAT images.


Subjects
Deep Learning , Uterine Cervical Neoplasms , Female , Humans , Uterine Cervical Neoplasms/diagnostic imaging , Feasibility Studies , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Tomography, X-Ray Computed/methods , Radiotherapy Planning, Computer-Assisted/methods
17.
Int J Med Robot ; 20(2): e2633, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38654571

ABSTRACT

BACKGROUND: Allergic rhinitis is a widespread health concern, and traditional treatments are often painful and ineffective. Acupuncture targeting the pterygopalatine fossa is effective but complicated by the intricate nearby anatomy. METHODS: To enhance safety and precision in targeting the pterygopalatine fossa, we introduce a deep learning-based model to refine its segmentation. Our model extends the U-Net framework with DenseASPP and integrates an attention mechanism for enhanced precision in localising and segmenting the pterygopalatine fossa. RESULTS: The model achieves a Dice similarity coefficient of 93.89% and a 95% Hausdorff distance of 2.53 mm, while using only 1.98 M parameters. CONCLUSIONS: Our deep learning approach yields significant advances in localising and segmenting the pterygopalatine fossa, providing a reliable basis for guiding pterygopalatine fossa-assisted punctures.
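The 95% Hausdorff distance reported above is the robust, 95th-percentile variant of the Hausdorff metric, which ignores the worst 5% of boundary errors. A minimal sketch on 2-D point sets, using nearest-rank percentiles and brute-force nearest neighbours (names and the toy sets are illustrative):

```python
import math

def percentile_hausdorff(a, b, q=95):
    """q-th percentile (robust) Hausdorff distance between two 2-D point sets."""
    def directed(src, dst):
        # sorted distances from each point in src to its nearest neighbour in dst
        return sorted(min(math.dist(p, r) for r in dst) for p in src)

    def percentile(values, q):
        # nearest-rank percentile of an ascending list
        k = max(0, math.ceil(q / 100 * len(values)) - 1)
        return values[k]

    # symmetrise: take the larger of the two directed percentile distances
    return max(percentile(directed(a, b), q), percentile(directed(b, a), q))

# Two parallel contours one unit apart
print(percentile_hausdorff([(0, 0), (1, 0), (2, 0)],
                           [(0, 1), (1, 1), (2, 1)]))  # 1.0
```

Unlike the plain (q=100) Hausdorff distance, the 95% variant is insensitive to a single stray boundary point, which is why it is preferred for reporting segmentation contours.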


Subjects
Deep Learning , Pterygopalatine Fossa , Humans , Pterygopalatine Fossa/diagnostic imaging , Pterygopalatine Fossa/anatomy & histology , Algorithms , Rhinitis, Allergic/diagnostic imaging , Rhinitis, Allergic/therapy , Imaging, Three-Dimensional/methods , Tomography, X-Ray Computed/methods , Image Processing, Computer-Assisted/methods , Reproducibility of Results
18.
Artif Intell Med ; 151: 102863, 2024 May.
Article in English | MEDLINE | ID: mdl-38593682

ABSTRACT

Hybrid volumetric medical image segmentation models, which combine the advantages of local convolution and global attention, have recently received considerable attention. While focusing mainly on architectural modifications, most existing hybrid approaches still use conventional data-independent weight initialization schemes, which restrict performance by ignoring the inherent volumetric nature of medical data. To address this issue, we propose a learnable weight initialization approach that uses the available medical training data to learn contextual and structural cues via proposed self-supervised objectives. Our approach is easy to integrate into any hybrid model and requires no external training data. Experiments on multi-organ and lung cancer segmentation tasks demonstrate its effectiveness, leading to state-of-the-art segmentation performance. On the multi-organ segmentation task, our data-dependent initialization compares favorably with a Swin-UNETR model pretrained on large-scale datasets. Our source code and models are available at: https://github.com/ShahinaKK/LWI-VMS.
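The paper's self-supervised objectives are not reproduced here, but the general idea of data-dependent initialization can be illustrated with an LSUV-style toy: rescale a layer's weights so that pre-activations computed on real training inputs have unit standard deviation. All names and the single-linear-layer setup are assumptions for illustration, not the paper's method:

```python
import math

def data_dependent_init(weights, inputs, target_std=1.0, iters=5):
    """LSUV-style data-dependent initialisation for one linear layer.

    weights : list of rows (output units), each a list of floats
    inputs  : real training samples, each a list of floats
    Rescales the weights until pre-activations on `inputs` have
    standard deviation ~= target_std, instead of relying on a
    data-independent scheme such as plain Gaussian init.
    """
    def forward(w, xs):
        # pre-activations of every unit on every sample
        return [sum(wi * xi for wi, xi in zip(row, x)) for x in xs for row in w]

    for _ in range(iters):
        outs = forward(weights, inputs)
        mean = sum(outs) / len(outs)
        std = math.sqrt(sum((o - mean) ** 2 for o in outs) / len(outs))
        if std < 1e-12 or abs(std - target_std) < 1e-6:
            break
        scale = target_std / std
        weights = [[wi * scale for wi in row] for row in weights]
    return weights
```

Because the layer is linear, one rescaling already sets the output standard deviation exactly to the target; iterating matters only once nonlinearities are inserted between measurement and rescaling.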


Subjects
Lung Neoplasms , Humans , Lung Neoplasms/diagnostic imaging , Image Processing, Computer-Assisted/methods , Algorithms
19.
J Biomed Opt ; 29(4): 046001, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38585417

ABSTRACT

Significance: Endoscopic screening for esophageal cancer (EC) may enable early diagnosis and treatment. While optical microendoscopic technology has shown promise in improving specificity, its limited field of view (<1 mm) significantly reduces the ability to survey large areas efficiently in EC screening. Aim: To improve screening efficiency, we propose an end-expandable endoscopic optical fiber probe for a larger field of visualization and, for the first time, evaluate a deep-learning-based image super-resolution (DL-SR) method to overcome the limited sampling capability. Approach: To demonstrate the feasibility of the end-expandable optical fiber probe, DL-SR was applied to simulated low-resolution microendoscopic images to generate super-resolved (SR) ones. By varying the degradation model of image data acquisition, we identified the optimal parameters for optical fiber probe prototyping. The proposed screening method was validated with a human pathology reading study. Results: Across the degradation parameters considered, the DL-SR method improved traditional measures of image quality to varying degrees. The endoscopists' interpretations of the SR images were comparable to those of the high-resolution ones. Conclusions: This work suggests avenues for developing DL-SR-enabled sparse image reconstruction to improve high-yield EC screening and similar clinical applications.


Subjects
Barrett Esophagus , Deep Learning , Esophageal Neoplasms , Humans , Optical Fibers , Esophageal Neoplasms/diagnostic imaging , Barrett Esophagus/pathology , Image Processing, Computer-Assisted
20.
Med Image Anal ; 94: 103149, 2024 May.
Article in English | MEDLINE | ID: mdl-38574542

ABSTRACT

Variation in histologic staining between medical centers is one of the most profound challenges in computer-aided diagnosis. The appearance disparity of pathological whole slide images makes algorithms less reliable, which in turn impedes the widespread adoption of downstream tasks such as cancer diagnosis. Furthermore, different stains introduce biases into training that, under domain shift, degrade test performance. In this paper we therefore propose MultiStain-CycleGAN, a multi-domain approach to stain normalization based on CycleGAN. Our modifications to CycleGAN allow us to normalize images of different origins without retraining or using different models. We perform an extensive evaluation of our method using various metrics and compare it to commonly used multi-domain-capable methods. First, we evaluate how well our method fools a domain classifier that tries to assign a medical center to an image. We then test our normalization on the tumor classification performance of a downstream classifier. Furthermore, we evaluate the image quality of the normalized images using the structural similarity index and the reduction of domain shift using the Fréchet inception distance. We show that our method is multi-domain capable, provides very high image quality among the compared methods, and most reliably fools the domain classifier while keeping tumor classifier performance high. By reducing domain influence, biases in the data can be removed and the origin of a whole slide image can be disguised, enhancing patient data privacy.
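The Fréchet inception distance measures the Fréchet distance between Gaussians fitted to Inception features of real and generated images. In the univariate case the closed form is easy to state; the function below is an illustrative reduction of that formula, not the full multivariate FID:

```python
import math

def frechet_distance_1d(mu1, var1, mu2, var2):
    """Fréchet distance between two univariate Gaussians.

    The multivariate form used by FID is
    ||mu1 - mu2||^2 + Tr(S1 + S2 - 2*(S1 @ S2)^(1/2));
    with scalar variances the trace term collapses to
    var1 + var2 - 2*sqrt(var1*var2).
    """
    return (mu1 - mu2) ** 2 + var1 + var2 - 2.0 * math.sqrt(var1 * var2)

print(frechet_distance_1d(0.0, 1.0, 0.0, 1.0))  # 0.0 for identical distributions
```

A smaller value after normalization indicates that the feature distributions of the two stain domains have moved closer together, which is how the paper quantifies domain-shift reduction.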


Subjects
Coloring Agents , Neoplasms , Humans , Coloring Agents/chemistry , Staining and Labeling , Algorithms , Diagnosis, Computer-Assisted , Image Processing, Computer-Assisted/methods